
    Implementation of genetic algorithm based fuzzy logic controller with automatic rule extraction in FPGA

    Many fuzzy logic controllers have been designed to replace complex, non-linear and bulky control equipment in numerous industrial sectors. However, designing such controllers requires thorough knowledge of the controlled process, and therefore highly experienced experts, which is not always feasible. Most of these processes are non-linear and depend on a large number of parameters, so deriving a mathematical representation of these systems is an arduous task. This project addresses these problems by proposing genetic-algorithm-based fuzzy logic systems as controllers. The system includes algorithms, run on a capable computing platform, that read an experimental data sheet obtained from observations of the system and generate a fine-tuned rule base for use in the fuzzy logic controller hardware. The hardware is implemented in an FPGA. Transfer of the synthesized rule base from the computer to the FPGA implementation, and of the crisp output value back to the computer, is done over UART. A graphical user interface is provided that runs on the computer.
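    The abstract gives no implementation details, so the following is only a minimal sketch of the general idea: a genetic algorithm tunes the consequents of a tiny fuzzy controller against recorded (input, output) pairs standing in for the "experimental data sheet". The triangular membership setup, fitness function, and all names are assumptions, not the paper's actual design.

```python
import random

def tri(x, a, b, c):
    """Triangular membership function over [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Three input fuzzy sets over [0, 1]: low, mid, high (partition of unity).
SETS = [(-0.5, 0.0, 0.5), (0.0, 0.5, 1.0), (0.5, 1.0, 1.5)]

def infer(x, consequents):
    """Weighted-average defuzzification: one crisp consequent per rule."""
    w = [tri(x, *s) for s in SETS]
    total = sum(w)
    return sum(wi * ci for wi, ci in zip(w, consequents)) / total if total else 0.0

def fitness(consequents, data):
    """Mean squared error of the controller on the recorded data sheet."""
    return sum((infer(x, consequents) - y) ** 2 for x, y in data) / len(data)

def evolve(data, pop_size=20, gens=60, seed=0):
    """GA loop: truncation selection plus Gaussian mutation, with elitism."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 1) for _ in SETS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda c: fitness(c, data))
        elite = pop[: pop_size // 2]                          # keep best half
        children = [[g + rng.gauss(0, 0.05) for g in p] for p in elite]
        pop = elite + children
    return min(pop, key=lambda c: fitness(c, data))

# Toy "experimental data sheet": target response y = x.
data = [(i / 10, i / 10) for i in range(11)]
best = evolve(data)
print(round(fitness(best, data), 4))
```

    In the paper's setup this synthesis step would run on the computer, and only the resulting rule base would be shipped to the FPGA over UART.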

    Mitosis Detection Under Limited Annotation: A Joint Learning Approach

    Mitotic counting is a vital prognostic marker of tumor proliferation in breast cancer. Deep-learning-based mitosis detection is on par with pathologists, but it requires large labeled datasets for training. We propose a deep classification framework for enhancing mitosis detection by leveraging class label information, via a softmax loss, and spatial distribution information among samples, via distance metric learning. We also investigate strategies for steadily providing informative samples to boost the learning. The efficacy of the proposed framework is established through evaluation on the ICPR 2012 and AMIDA 2013 mitosis datasets. Our framework significantly improves detection with small training data and achieves on-par or superior performance compared to state-of-the-art methods when using the entire training data. Comment: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI).
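    As a rough illustration of the joint objective described above (not the authors' actual loss), one can combine a softmax cross-entropy term with a pairwise contrastive distance-metric term that pulls same-class embeddings together and pushes different-class embeddings apart. The margin value and the weighting factor below are illustrative assumptions.

```python
import numpy as np

def softmax_loss(logits, labels):
    """Mean cross-entropy over a batch (the class-label term)."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(labels)), labels].mean()

def contrastive_loss(emb, labels, margin=1.0):
    """Pairwise metric-learning term over all pairs in the batch."""
    n = len(emb)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(emb[i] - emb[j])
            if labels[i] == labels[j]:
                total += d ** 2                       # pull same class together
            else:
                total += max(0.0, margin - d) ** 2    # push other classes apart
            pairs += 1
    return total / pairs

def joint_loss(logits, emb, labels, alpha=0.5):
    """Joint objective: classification loss plus weighted metric term."""
    return softmax_loss(logits, labels) + alpha * contrastive_loss(emb, labels)

# Toy batch of two samples with logits and 2-D embeddings.
logits = np.array([[4.0, 0.0], [0.0, 4.0]])
emb = np.array([[0.0, 0.0], [3.0, 0.0]])
labels = np.array([0, 1])
print(round(joint_loss(logits, emb, labels), 3))
```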

    Matching single cells across modalities with contrastive learning and optimal transport.

    Understanding the interactions between the biomolecules that govern cellular behaviors remains an open question in biology. Recent advances in single-cell technologies have enabled the simultaneous quantification of multiple biomolecules in the same cell, opening new avenues for understanding cellular complexity and heterogeneity. Still, the resulting multimodal single-cell datasets present unique challenges arising from their high dimensionality and multiple sources of acquisition noise. Computational methods that can match cells across different modalities offer an appealing route towards this goal. In this work, we propose MatchCLOT, a novel method for modality matching inspired by recent promising developments in contrastive learning and optimal transport. MatchCLOT uses contrastive learning to learn a common representation between two modalities and applies entropic optimal transport as an approximate maximum-weight bipartite matching algorithm. Our model obtains state-of-the-art performance on two curated benchmarking datasets and an independent test dataset, improving on the top-scoring method by 26.1% while preserving the underlying biological structure of the multimodal data. Importantly, MatchCLOT offers large gains in computational time and memory that, in contrast to existing methods, allow it to scale well with the number of cells. As single-cell datasets become increasingly large, MatchCLOT offers an accurate and efficient solution to the problem of modality matching.
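    The matching step can be sketched as follows (this is not the MatchCLOT code): assuming contrastive encoders have already produced embeddings for the two modalities, entropic optimal transport via Sinkhorn iterations turns a cross-modality similarity matrix into a soft matching. Variable names and the regularization strength are illustrative choices.

```python
import numpy as np

def sinkhorn(cost, eps=0.05, n_iter=200):
    """Entropic OT plan between two uniform marginals via Sinkhorn scaling."""
    n, m = cost.shape
    K = np.exp(-cost / eps)                 # Gibbs kernel of the cost matrix
    a, b = np.ones(n) / n, np.ones(m) / m   # uniform marginals
    v = np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)                     # alternate row/column rescaling
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]      # transport plan

def match(emb_a, emb_b):
    """Match each cell in modality A to a cell in modality B."""
    # Cosine similarity between L2-normalized embeddings, turned into a cost.
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    cost = 1.0 - a @ b.T
    plan = sinkhorn(cost)
    return plan.argmax(axis=1)              # hard matching from the soft plan

# Toy check: identical embeddings in shuffled order should be recovered.
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 8))
perm = rng.permutation(5)
idx = match(emb, emb[perm])
```

    The entropic regularization `eps` trades off matching sharpness against numerical stability; smaller values concentrate the plan closer to a hard permutation.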

    Computational Immunohistochemistry: Recipes for Standardization of Immunostaining

    Cancer diagnosis and personalized cancer treatment rely heavily on the visual assessment of immunohistochemically-stained tissue specimens. The precision of this assessment depends critically on the quality of immunostaining, which is governed by a number of parameters used in the staining process. Tuning of the staining-process parameters is mostly based on pathologists' qualitative assessment, which incurs inter- and intra-observer variability. The lack of standardization in staining across pathology labs leads to poor reproducibility and consequently to uncertainty in diagnosis and treatment selection. In this paper, we propose a methodology to address this issue through a quantitative evaluation of staining quality, using visual computing and machine learning techniques on immunohistochemically-stained tissue images. This enables a statistical analysis of the sensitivity of the staining quality to the process parameters and thereby provides an optimal operating range for obtaining high-quality immunostains. We evaluate the proposed methodology on HER2-stained breast cancer tissues and demonstrate its use in defining guidelines to optimize and standardize immunostaining.

    Deep Learning of Entity-Guided Representations in Digital Pathology

    Pathological examination is the gold standard for cancer diagnosis, prognosis, and prediction of therapeutic response. Advancements in scanning technologies and an increased focus on precision medicine have paved the way for digital-pathology-based assessments. Digital pathology has enabled the digitization of microscopy slides into high-resolution whole-slide images and opened up opportunities for computational pathology (CP). CP aspires to alleviate the cumbersome and time-consuming routine workflow of pathologists by introducing computer-aided assistive tools. To this end, CP leverages computational techniques for automated exploration and extraction of meaningful information from histopathology images. The demand for CP has recently gained even more attention due to the growing number of diagnostic cases per year. The basis of a typical CP system is artificial intelligence, in particular deep learning (DL), owing to its recent large-scale success. The capability of DL to automatically extract and utilize informative representations from complex histopathology images in a data-driven manner has popularized its adoption in CP. Several DL methods have been developed to address various histopathology tasks, such as nuclei detection and characterization, tumor delineation, tissue grading and staging, and survival estimation. However, the clinical adoption of DL methods is inhibited by several challenges, including: (1) the infeasibility of acquiring large, high-quality annotated histopathology datasets for training models; (2) the prohibitive computational resources required for processing large whole-slide images; and (3) a lack of transparency and interpretability of DL decisions. Further, most DL models in CP are built on convolutional neural networks (CNNs), which treat an image as a composition of multisets of pixels and perform analyses in a pixel paradigm.
However, operating in the pixel paradigm induces several crucial bottlenecks: (i) tissue composition and well-established prior pathological knowledge cannot be easily utilized, due to a disregard for histological entities, e.g., nuclei, cells, and glands; (ii) the local cell microenvironment and the global tissue microenvironment cannot be captured simultaneously; (iii) operating on large whole-slide images is computationally intensive; and (iv) model interpretation is non-straightforward, because the trained models do not make diagnostic decisions explicitly based on well-defined histological entities. This thesis aims to address the aforementioned challenges and limitations of DL methods in CP. The motivation herein is that the analysis of tissues should rely on the phenotype and topological distribution of their constituent histological entities. Therefore, the analytical paradigm is proposed to be shifted from conventional pixels to entities. A histopathology image is first transformed into an entity-guided representation, specifically an entity graph. The nodes and edges of the graph denote comprehensible histological entities and entity-to-entity interactions, respectively. Local entity-level phenotypical properties are embedded in the nodes, and the global tissue microenvironment is captured by the graph topology. Subsequently, advancements in DL techniques on graph-structured data, in particular Graph Neural Networks (GNNs), are leveraged to efficiently learn a relation-aware entity-graph representation for addressing downstream histopathology tasks. Operating in the entity paradigm enables the incorporation of task-relevant entity-level prior knowledge for comprehensive tissue modelling. Entity graphs, being more flexible and memory-efficient than their pixel-based counterparts, can scale to images of arbitrary shapes and sizes.
Further, interpreting an entity-graph-based model can highlight the entities and interactions salient to model decisions, which pathologists can directly comprehend. The relevance and superiority of learning on entity-guided tissue representations are established for a variety of histopathology tasks across several tissue types. The proposed entity graphs encode different entity types, i.e., nuclei, tissue regions, or both, and include different graph topologies, i.e., uni-level, multi-level, and hierarchical. Further, various entity-guided GNNs are proposed herein to tackle the challenges of: (1) learning from weak supervision and limited annotations; (2) processing histopathology images of arbitrary sizes; and (3) interpretability and explainability of model decisions in pathologist-friendly terminology. Specifically, the proposed methodologies are applied to the following histopathology tasks: (a) supervised subtyping of breast carcinoma tumor regions, (b) weakly-supervised simultaneous classification and semantic segmentation of prostate cancer needle biopsies, and (c) generating qualitative and quantitative interpretations of breast subtyping model decisions. The proposed methods achieve state-of-the-art performance on these tasks and have been validated by domain-expert pathologists. The generalization ability of the proposed methods is further substantiated by classifying and segmenting prostate cancer biopsies from multiple data sources. In addition, a flexible open-source Python library, HistoCartography, has been developed to facilitate effective graph analytics in digital histopathology.
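    The entity-graph construction described above can be illustrated with a minimal sketch (this is not the HistoCartography API): detected nuclei become nodes carrying phenotype features, and edges connect each nucleus to its k nearest spatial neighbours, so the graph topology encodes the tissue microenvironment. All names and the choice of k are assumptions.

```python
import numpy as np

def build_cell_graph(centroids, features, k=3):
    """k-NN entity graph: returns (node_features, directed edge list)."""
    n = len(centroids)
    # Pairwise Euclidean distances between nucleus centroids.
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # exclude self-loops
    edges = []
    for i in range(n):
        for j in np.argsort(d[i])[:k]:       # k nearest spatial neighbours
            edges.append((i, int(j)))
    return features, edges

# Toy example: 6 "nuclei" with 2-D centroids and 4-D phenotype vectors.
rng = np.random.default_rng(0)
centroids = rng.uniform(0, 100, size=(6, 2))
feats = rng.normal(size=(6, 4))
nodes, edges = build_cell_graph(centroids, feats, k=3)
print(len(edges))  # 6 nodes * 3 neighbours = 18 directed edges
```

    A GNN would then propagate the node features along these edges, so that each nucleus embedding aggregates information from its local microenvironment.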

    Generative appearance replay for continual unsupervised domain adaptation

    Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay-based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on three datasets with different organs and modalities, where it substantially outperforms existing techniques. Our code is available at: https://github.com/histocartography/generative-appearance-replay. ISSN: 1361-8415.

    Quantitative microimmunohistochemistry for the grading of immunostains on tumour tissues

    Immunohistochemistry is the gold-standard method for cancer-biomarker identification and patient stratification. Yet, owing to signal saturation, its use as a quantitative assay is limited, as it cannot distinguish tumours with similar biomarker-expression levels. Here, we introduce a quantitative microimmunohistochemistry assay that enables the acquisition of dynamic information, via a metric of the evolution of the immunohistochemistry signal during tissue staining, for the quantification of relative antigen density on tissue surfaces. We used the assay to stratify 30 patient-derived breast-cancer samples into conventional classes and to determine the proximity of each sample to the other classes. We also show that the assay enables the quantification of multiple biomarkers (human epidermal growth factor receptor, oestrogen receptor and progesterone receptor) in a standard breast-cancer panel. The integration of quantitative microimmunohistochemistry into current pathology workflows may lead to improvements in the precision of biomarker quantification.